By establishing connections between neural networks and kernel methods, infinite-width limits have shed light on the generalization and optimization aspects of deep learning. Despite their importance, the practicality of these kernel methods is limited in large-scale learning settings due to their (super-)quadratic runtime and memory complexities. Moreover, most prior work on neural kernels has focused on the ReLU activation, mainly because of its popularity but also because of the difficulty of computing such kernels for general activations. In this work, we overcome these difficulties by providing methods to work with general activations. First, we compile and expand the list of activation functions that admit exact dual activation expressions for computing neural kernels. When an exact computation is unknown, we present methods to effectively approximate it. We propose a fast sketching method that approximates any multi-layer Neural Network Gaussian Process (NNGP) kernel and Neural Tangent Kernel (NTK) matrices for a wide range of activation functions, going beyond the commonly analyzed ReLU activation. This is done by showing how to approximate the neural kernels using the truncated Hermite expansion of any desired activation function. While most prior work requires data points on the unit sphere, our methods do not suffer from such limitations and are applicable to any dataset of points in $\mathbb{R}^d$. Furthermore, we provide a subspace embedding for NNGP and NTK matrices with near input-sparsity runtime and near-optimal target dimension, which applies to any \emph{homogeneous} dual activation function with a rapidly converging Taylor expansion. Empirically, with respect to exact convolutional NTK (CNTK) computation, our method achieves a $106\times$ speedup for the approximate CNTK of a 5-layer Myrtle network on the CIFAR-10 dataset.
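As a rough, self-contained illustration of the Hermite-expansion idea (not the paper's sketching algorithm), the sketch below numerically estimates the Hermite coefficients of an arbitrary activation via Gauss-Hermite quadrature and uses the truncated expansion to approximate its dual activation; the degree, number of quadrature points, and the ReLU sanity check are arbitrary choices for this example.

```python
import numpy as np
from math import factorial, sqrt

def hermite_coeffs(activation, degree, quad_points=80):
    """Estimate a_r = E[activation(X) h_r(X)], X ~ N(0, 1), where h_r is the
    r-th normalized probabilists' Hermite polynomial, via Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(quad_points)
    weights = weights / weights.sum()          # normalize to the N(0, 1) measure
    coeffs = []
    for r in range(degree + 1):
        basis = np.zeros(r + 1)
        basis[r] = 1.0
        h_r = np.polynomial.hermite_e.hermeval(nodes, basis) / sqrt(factorial(r))
        coeffs.append(np.sum(weights * activation(nodes) * h_r))
    return np.array(coeffs)

def dual_activation(rho, coeffs):
    """Truncated dual activation: k(rho) ~ sum_r a_r^2 rho^r (unit-norm inputs)."""
    return np.sum(coeffs ** 2 * rho ** np.arange(len(coeffs)))

# Sanity check against the known closed-form dual activation of ReLU (arc-cosine kernel).
relu = lambda x: np.maximum(x, 0.0)
a = hermite_coeffs(relu, degree=20)
rho = 0.3
exact = (np.sqrt(1 - rho**2) + rho * (np.pi - np.arccos(rho))) / (2 * np.pi)
print(dual_activation(rho, a), exact)  # the two values should agree closely
```

The full NNGP and NTK matrices are then built by composing such dual activations layer by layer, which is where the paper's sketching machinery takes over.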
Determinantal point processes (DPPs) are elegant models that assign a probability to every subset of a collection of $n$ items. While DPPs are traditionally parameterized by a symmetric kernel matrix, removing this symmetry constraint, which yields nonsymmetric DPPs (NDPPs), leads to significant improvements in modeling power and predictive performance. Recent work has studied Markov chain Monte Carlo (MCMC) sampling algorithms for NDPPs restricted to size-$k$ subsets (called $k$-NDPPs). However, the runtime of such approaches is quadratic in $n$, making them infeasible for large-scale settings. In this work, we develop a scalable MCMC sampling algorithm for $k$-NDPPs with low-rank kernels, yielding a runtime that is sublinear in $n$. Our method is based on a state-of-the-art NDPP rejection sampling algorithm, which we enhance with a novel approach for efficiently constructing the proposal distribution. Furthermore, we extend our scalable $k$-NDPP sampling algorithm to the case without size constraints. The resulting sampling method has polynomial time complexity in the kernel rank, whereas existing approaches have runtimes that are exponential in the rank. Through both theoretical analysis and experiments on real-world datasets, we verify that our scalable approximate sampling algorithms are orders of magnitude faster than existing sampling approaches for $k$-NDPPs and NDPPs.
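For intuition only, here is a minimal sketch of a standard swap-chain Metropolis sampler for a size-$k$ DPP, the kind of baseline whose cost grows with $n$; it assumes a kernel whose principal minors are nonnegative. The paper's actual contribution, a proposal construction that makes rejection sampling sublinear in $n$ for low-rank NDPP kernels, is not implemented here.

```python
import numpy as np

def kdpp_swap_mcmc(L, k, n_steps, seed=0):
    """Swap-chain Metropolis sampler for a size-k DPP with kernel L.
    Proposes swapping one in-set item for one out-of-set item and accepts
    with probability min(1, det(L_S') / det(L_S)).  Assumes all principal
    minors of L are nonnegative (true for valid DPP/NDPP kernels)."""
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    S = list(rng.choice(n, size=k, replace=False))
    logdet = lambda idx: np.linalg.slogdet(L[np.ix_(idx, idx)])[1]
    cur = logdet(S)
    for _ in range(n_steps):
        i = int(rng.integers(k))                           # position to swap out
        j = int(rng.choice(list(set(range(n)) - set(S))))  # item to swap in
        proposal = list(S)
        proposal[i] = j
        new = logdet(proposal)
        if np.log(rng.random()) < new - cur:               # Metropolis acceptance
            S, cur = proposal, new
    return S

# Example with a random low-rank symmetric kernel.
rng = np.random.default_rng(1)
B = rng.standard_normal((200, 10))
print(sorted(kdpp_swap_mcmc(B @ B.T, k=5, n_steps=2000)))
```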
The Fokker-Planck equation (FPE) is the partial differential equation that governs the density evolution of It\^o processes and is of great importance in the literature of statistical physics and machine learning. The FPE can be regarded as a continuity equation in which the change of the density is completely determined by a time-varying velocity field. Importantly, this velocity field also depends on the current density function. As a result, the ground-truth velocity field can be shown to be the solution of a fixed-point equation, a property we call self-consistency. In this paper, we exploit this concept to design a potential function of the hypothesis velocity field and prove that, if such a function diminishes to zero during training, the density trajectory generated by the hypothesis velocity field converges to the solution of the FPE in the Wasserstein-2 sense. The proposed potential function is amenable to neural-network-based parameterization, since the stochastic gradient with respect to the parameters can be computed efficiently. Once a parameterized model, such as a neural ordinary differential equation, is trained, we can generate the entire trajectory of the FPE.
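To make the self-consistency notion concrete, here is a small one-dimensional sketch under an assumed illustrative setup (it is not the paper's potential function): for the FPE $\partial_t p = -\partial_x(p\,v)$ with $v(x) = b(x) - D\,\partial_x \log p(x)$, we compute the velocity field implied by a density on a grid and measure how far a hypothesis velocity field is from it.

```python
import numpy as np

def implied_velocity(x, p, drift, D):
    """Velocity field implied by the density for the 1-D Fokker-Planck equation:
    v*(x) = b(x) - D * d/dx log p(x), so that dp/dt = -d/dx (p * v*)."""
    dlogp = np.gradient(np.log(p + 1e-12), x)
    return drift(x) - D * dlogp

def self_consistency_residual(x, p, v_hat, drift, D):
    """Density-weighted squared discrepancy between a hypothesis velocity field
    and the self-consistent one (a simple stand-in for a potential function)."""
    v_star = implied_velocity(x, p, drift, D)
    return np.sum(p * (v_hat(x) - v_star) ** 2) * (x[1] - x[0])

# Ornstein-Uhlenbeck check: drift b(x) = -x, diffusion D.  The stationary
# density N(0, D) makes the self-consistent velocity vanish identically,
# so the zero field should give a (numerically) negligible residual.
D = 0.5
x = np.linspace(-5.0, 5.0, 2001)
p = np.exp(-x**2 / (2 * D)) / np.sqrt(2 * np.pi * D)
residual = self_consistency_residual(x, p, lambda z: np.zeros_like(z), lambda z: -z, D)
print(residual)  # close to zero
```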
We provide sharp path-dependent generalization and excess-risk guarantees for smooth losses (possibly non-Lipschitz, possibly nonconvex) in the interpolation regime. At the heart of our analysis is a new generalization error bound for deterministic symmetric algorithms, which implies that average output stability together with a bounded expected optimization error at termination lead to generalization. This result shows that small generalization error occurs along the optimization path, and allows us to bypass the Lipschitz assumptions on the loss that are prevalent in prior work. For nonconvex, Polyak-Lojasiewicz (PL), convex, and strongly convex losses, we show the explicit dependence of the generalization error on the accumulated path-dependent optimization error, the terminal optimization error, the number of samples, and the number of iterations. For nonconvex smooth losses, we prove that full-batch GD efficiently generalizes close to any stationary point at termination, under the proper choice of a decreasing step size. Additionally, if the loss is nonconvex but the objective is PL, we derive quartically vanishing bounds on the generalization error and the corresponding excess risk, for a choice of a large constant step size. For (resp. strongly) convex smooth losses, we prove that full-batch GD also generalizes with large constant step sizes and achieves (resp. quartically) small excess risk while training fast. In all cases, we close the generalization error gap by showing matching generalization and optimization error rates. Our full-batch GD generalization error and excess risk bounds are strictly tighter than the existing bounds for (stochastic) GD when the loss is smooth (but possibly non-Lipschitz).
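For reference, the Polyak-Lojasiewicz (PL) condition mentioned above is the standard inequality below, stated here in a generic form; the paper's precise constants and conventions may differ.

```latex
% PL condition with constant \mu > 0: the gradient norm controls suboptimality,
% which yields linear convergence of gradient descent without requiring convexity.
\[
  \tfrac{1}{2}\,\lVert \nabla f(w) \rVert^{2} \;\ge\; \mu\,\bigl(f(w) - f^{\star}\bigr)
  \quad \text{for all } w .
\]
```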
In this paper, we fully answer the above question through a key algebraic condition on graph functions, called \textit{permutation compatibility}, which relates the graph and its node features to functional constraints. We prove that: (i) a GNN, as a graph function, is necessarily permutation compatible; (ii) conversely, any permutation-compatible function, when restricted to input graphs with distinct node features, can be generated by a GNN; (iii) for arbitrary node features (not necessarily distinct), a simple feature augmentation scheme suffices for a GNN to generate permutation-compatible functions; (iv) permutation compatibility can be verified by checking only quadratically many functional constraints, rather than performing an exhaustive search over all permutations; (v) GNNs can generate \textit{any} graph function once we augment the node features with node identities, thus going beyond graph isomorphism and permutation compatibility. The above characterizations pave the way for formally studying the intricate connections between GNNs and other algorithmic procedures. For example, our characterization implies that many natural graph problems, such as min-cut value, max-flow value, max-clique size, and shortest path, can be generated by a GNN using a simple feature augmentation. In contrast, the celebrated Weisfeiler-Lehman graph isomorphism test fails whenever a GNN cannot generate a permutation-compatible function with identical features. At the heart of our analysis lies a novel representation theorem, which identifies the basis functions of GNNs. This enables us to translate properties of a target graph function into properties of the GNN's aggregation function.
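As a toy numeric illustration of the symmetry underlying permutation compatibility (not the paper's formal definition or construction), the sketch below checks that a simple mean-aggregation message-passing layer commutes with relabeling the nodes: permuting the adjacency matrix and the features permutes the output in the same way.

```python
import numpy as np

def gnn_layer(A, X, W_self, W_neigh):
    """One mean-aggregation message-passing layer: each node combines its own
    features with the mean of its neighbors' features."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    return np.tanh(X @ W_self + ((A @ X) / deg) @ W_neigh)

rng = np.random.default_rng(0)
n, d = 6, 4
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # random undirected graph
X = rng.standard_normal((n, d))              # node features
W1, W2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))

perm = rng.permutation(n)
P = np.eye(n)[perm]                          # permutation matrix
out = gnn_layer(A, X, W1, W2)
out_perm = gnn_layer(P @ A @ P.T, P @ X, W1, W2)
print(np.allclose(P @ out, out_perm))        # True: outputs permute with the nodes
```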
This paper explores the generalization loss of linear regression in variably parameterized families of models, covering both underparameterized and overparameterized regimes. We show that the generalization curve can have an arbitrary number of peaks, and moreover, the locations of those peaks can be explicitly controlled. Our results highlight the fact that both the classical U-shaped generalization curve and the recently observed double-descent curve are not intrinsic properties of the model family. Instead, their emergence is due to the interaction between the properties of the data and the inductive biases of the learning algorithm.
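A quick way to see this kind of behavior empirically (a generic random-features double-descent demo, not the specific constructions analyzed in the paper) is to fit minimum-norm least squares on an increasing number of random ReLU features and watch the test error; the data model, feature map, and sizes below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 20
w_true = rng.standard_normal(d)

def make_data(n):
    X = rng.standard_normal((n, d))
    return X, X @ w_true + 0.5 * rng.standard_normal(n)

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

def features(X, V):
    """Random ReLU feature map with fixed random weights V (d x p)."""
    return np.maximum(X @ V, 0.0)

for p in [10, 50, 90, 100, 110, 200, 500, 2000]:
    V = rng.standard_normal((d, p)) / np.sqrt(d)
    Phi_tr, Phi_te = features(X_tr, V), features(X_te, V)
    w = np.linalg.pinv(Phi_tr) @ y_tr        # minimum-norm least-squares fit
    print(p, round(float(np.mean((Phi_te @ w - y_te) ** 2)), 3))
# The test error typically spikes near p ~= n_train (the interpolation threshold)
# and decreases again for larger p: the peak location is set by the interaction
# between data and feature map rather than being intrinsic to the model family.
```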
Performance metrics-driven context caching has a profound impact on throughput and response time in distributed context management systems for real-time context queries. This paper proposes a reinforcement learning based approach to adaptively cache context with the objective of minimizing the cost incurred by context management systems in responding to context queries. Our novel algorithms enable context queries and sub-queries to reuse and repurpose cached context in an efficient manner. This approach differs from traditional data caching approaches in three main ways. First, we make selective context cache admissions using no prior knowledge of the context or the context query load. Second, we develop and incorporate innovative heuristic models to estimate the expected performance of caching an item when making these decisions. Third, our strategy defines a time-aware continuous cache action space. We present two reinforcement learning agents: a value-function-estimating actor-critic agent and a policy-search agent using the deep deterministic policy gradient method. The paper also proposes adaptive policies, such as eviction and cache memory scaling, to complement our objective. Our method is evaluated using a synthetically generated load of context sub-queries and a synthetic dataset inspired by real-world data and query samples. We further investigate optimal adaptive caching configurations under different settings. This paper presents, compares, and discusses our findings that the proposed selective caching methods achieve short- and long-term cost and performance efficiency. The paper demonstrates that the proposed methods outperform other modes of context management, such as redirector mode and database mode, as well as a cache-all policy, by up to 60% in cost efficiency.
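To make the admission idea concrete, here is a deliberately simplified, hypothetical sketch of a benefit-based cache-admission rule; the attribute names and the closed-form utility are illustrative assumptions, whereas the paper's agents learn such value estimates with actor-critic and deep deterministic policy gradient methods.

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    expected_hit_rate: float    # estimated queries/second that would reuse this item
    retrieval_cost: float       # cost of fetching the item from its context provider
    refresh_cost: float         # cost per second of keeping the cached copy fresh
    expected_lifetime: float    # seconds the item is expected to remain relevant

def expected_caching_benefit(item: ContextItem) -> float:
    """Hypothetical heuristic: provider-fetch cost avoided by cache hits over the
    item's lifetime, minus the cost of keeping the cached copy fresh."""
    avoided = item.expected_hit_rate * item.expected_lifetime * item.retrieval_cost
    upkeep = item.refresh_cost * item.expected_lifetime
    return avoided - upkeep

def admit(item: ContextItem) -> bool:
    # Selective admission: cache only when the estimated benefit is positive.
    return expected_caching_benefit(item) > 0.0

print(admit(ContextItem(expected_hit_rate=2.0, retrieval_cost=1.0,
                        refresh_cost=0.5, expected_lifetime=30.0)))  # True
```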
It does not matter whether it is a job interview with Tech Giants, Wall Street firms, or a small startup; all candidates want to demonstrate their best selves or even present themselves as better than they really are. Meanwhile, recruiters want to know the candidates' authentic selves and detect the soft skills that prove an expert candidate would be a great fit in any company. Recruiters worldwide usually struggle to find employees with the highest level of these skills. Digital footprints can assist recruiters in this process by providing candidates' unique sets of online activities, and social media delivers one of the largest digital footprints for tracking people. In this study, for the first time, we show that a wide range of behavioral competencies consisting of 16 in-demand soft skills can be automatically predicted from Instagram profiles based on the following lists and other quantitative features using machine learning algorithms. We also provide predictions for the Big Five personality traits. Models were built on a sample of 400 Iranian volunteer users who answered an online questionnaire and provided their Instagram usernames, which allowed us to crawl their public profiles. We applied several machine learning algorithms to the uniformed data. Deep learning models mostly outperformed the other algorithms, demonstrating 70% and 69% average accuracy in two-level and three-level classifications, respectively. Creating a large pool of people with the highest level of soft skills, and making more accurate evaluations of job candidates, is possible with the application of AI to social media user-generated data.
Vision transformers (ViTs) are quickly becoming the de-facto architecture for computer vision, yet we understand very little about why they work and what they learn. While existing studies visually analyze the mechanisms of convolutional neural networks, an analogous exploration of ViTs remains challenging. In this paper, we first address the obstacles to performing visualizations on ViTs. Assisted by these solutions, we observe that neurons in ViTs trained with language model supervision (e.g., CLIP) are activated by semantic concepts rather than visual features. We also explore the underlying differences between ViTs and CNNs, and we find that transformers detect image background features, just like their convolutional counterparts, but their predictions depend far less on high-frequency information. On the other hand, both architecture types behave similarly in the way features progress from abstract patterns in early layers to concrete objects in late layers. In addition, we show that ViTs maintain spatial information in all layers except the final layer. In contrast to previous works, we show that the last layer most likely discards the spatial information and behaves as a learned global pooling operation. Finally, we conduct large-scale visualizations on a wide range of ViT variants, including DeiT, CoaT, ConViT, PiT, Swin, and Twin, to validate the effectiveness of our method.
Machine learning algorithms have revolutionized different fields, including natural language processing, computer vision, signal processing, and medical data processing. Despite the excellent capabilities of machine learning algorithms in various tasks and areas, the performance of these models typically deteriorates when there is a shift between the test and training data distributions. This gap occurs due to the violation of the fundamental assumption that the training and test data are independent and identically distributed (i.i.d.). In real-world scenarios where collecting data from all possible domains for training is costly, or even impossible, the i.i.d. assumption can hardly be satisfied. The problem is even more severe for medical images and signals, because collecting data requires either expensive equipment or a meticulous experimental setup, even for a single domain. Additionally, the decrease in performance may have severe consequences in the analysis of medical records. As a result of such problems, the ability to generalize and adapt under distribution shifts (domain generalization (DG) and domain adaptation (DA)) is essential for the analysis of medical data. This paper provides the first systematic review of DG and DA on functional brain signals, filling the gap left by the absence of a comprehensive study in this area. We provide detailed explanations and categorizations of the datasets, approaches, and architectures used in DG and DA on functional brain images. We further address noteworthy future directions in this field.